Why Can't You Finish Anything?
The skills needed for wrapping up aren't always what you expect. My house contains a vaguely defined room: a parlor-like space created by a renovation decades ago. After my son was born, it served as a playroom, full of baby and toddler toys. Then it became a nook where, late at night, my wife and I could listen to music and read. That equilibrium held until the Legos and board games arrived; their incursion was the beginning of the end.
Constant Regret, Generalized Mixability, and Mirror Descent
We consider the setting of prediction with expert advice; a learner makes predictions by aggregating those of a group of experts. Under this setting, and for the right choice of loss function and ``mixing'' algorithm, it is possible for the learner to achieve a constant regret regardless of the number of prediction rounds. For example, a constant regret can be achieved for \emph{mixable} losses using the \emph{aggregating algorithm}. The \emph{Generalized Aggregating Algorithm} (GAA) is a name for a family of algorithms parameterized by convex functions on simplices (entropies), which reduce to the aggregating algorithm when using the \emph{Shannon entropy} $\operatorname{S}$. For a given entropy $\Phi$, losses for which a constant regret is possible using the \textsc{GAA} are called $\Phi$-mixable. Which losses are $\Phi$-mixable was previously left as an open question. We fully characterize $\Phi$-mixability and answer other open questions posed by \cite{Reid2015}. We show that the Shannon entropy $\operatorname{S}$ is fundamental in nature when it comes to mixability; any $\Phi$-mixable loss is necessarily $\operatorname{S}$-mixable, and the lowest worst-case regret of the \textsc{GAA} is achieved using the Shannon entropy. Finally, by leveraging the connection between the \emph{mirror descent algorithm} and the update step of the GAA, we suggest a new \emph{adaptive} generalized aggregating algorithm and analyze its performance in terms of the regret bound.
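The aggregating algorithm the abstract refers to can be illustrated in the special case of binary log loss, which is mixable with learning rate eta = 1 (the Shannon-entropy case in the paper's terminology). The sketch below is not taken from the paper; the function name and interface are illustrative. For log loss, the substitution step reduces to predicting the weight-averaged probability, and the learner's cumulative loss exceeds the best expert's by at most ln N, a constant independent of the number of rounds.

```python
import math

def aggregating_algorithm_log_loss(expert_probs, outcomes):
    """Aggregating algorithm for binary log loss (eta = 1).

    expert_probs: per round, a list of each expert's predicted P(y=1).
    outcomes: observed binary outcomes (0 or 1), one per round.
    Returns (learner_total_loss, per_expert_total_losses).
    """
    n = len(expert_probs[0])
    log_w = [0.0] * n            # log-weights; uniform prior over experts
    learner_total = 0.0
    expert_totals = [0.0] * n
    for probs, y in zip(expert_probs, outcomes):
        # Normalize weights stably in log space.
        m = max(log_w)
        w = [math.exp(lw - m) for lw in log_w]
        z = sum(w)
        # Substitution step: for log loss this is simply the
        # weight-averaged probability (a Bayesian mixture).
        p = sum(wi * pi for wi, pi in zip(w, probs)) / z
        learner_total += -math.log(p if y == 1 else 1.0 - p)
        for i, pi in enumerate(probs):
            li = -math.log(pi if y == 1 else 1.0 - pi)
            expert_totals[i] += li
            log_w[i] -= li       # exponential-weights update with eta = 1
    return learner_total, expert_totals
```

Because the per-round mixture predictions multiply out to the Bayesian marginal likelihood, the learner's total loss is guaranteed to lie within ln N of the best expert's, however many rounds are played — the "constant regret" the abstract describes.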
For example, when solving RL problems such as Atari games, we may test different representation methods. For the average-reward setting, it is still an open question whether S-bounds are achievable. Our approach can be adapted to the episodic case, where the regret bounds would benefit from the improved bounds available in this setting. The A-dependence is optimal, as for UCRL2, while the optimal dependence on S is still an open question (also for the MDP case). The optimal dependence on |Φ| in our setting is also open.
Open questions, such as lower bounds, private information, and real-valued feedback, were pointed out by the reviewers.
We thank the reviewers for their detailed comments and suggestions; we will address all of them in the revision. A prior work (AIStats'19) considered the problem of learning an optimal action but ignored contextual information. In this work, we incorporate contextual information, which is readily available in many applications. The idea might look incremental.
Is Good Taste a Trap?
The judgments we use to elevate our lives can also hem them in. In Belle Burden's memoir, "Strangers," she describes the end of her marriage. It happened suddenly: until learning of her husband's infidelity, through a voice mail from a stranger, she had no idea anything was wrong. Burden and her husband shared an apartment in Tribeca and a house on Martha's Vineyard.
3 Common Misunderstandings About AI in 2025
Children and parked cars are color-coded on a monitor inside a Mercedes-Benz S-Class during an autonomous driving and AI demonstration in Immendingen, Germany, on July 17, 2018. In 2025, misconceptions about AI flourished as people struggled to make sense of the rapid development and adoption of the technology. Here are three popular ones to leave behind in the New Year. When GPT-5 was released in May, people wondered (not for the first time) if AI was hitting a wall.
Why A.I. Didn't Transform Our Lives in 2025
This was supposed to be the year when autonomous agents took over everyday tasks. One year ago, Sam Altman, the C.E.O. of OpenAI, made a bold prediction: "We believe that, in 2025, we may see the first AI agents 'join the workforce' and materially change the output of companies." A couple of weeks later, the company's chief product officer, Kevin Weil, said at the World Economic Forum conference in Davos in January, "I think 2025 is the year that we go from ChatGPT being this super smart thing . . . to ChatGPT doing things in the real world for you." He gave examples of artificial intelligence filling out online forms and booking restaurant reservations. He later promised, "We're going to be able to do that, no question."